Alternating direction method of multipliers (ADMM)

ADMM problem form (with $f$, $g$ convex): minimize $f(x) + g(z)$ subject to $Ax + Bz = c$

two sets of variables, with a separable objective
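
For example, the lasso fits this template (a standard instance, as in Boyd et al., ref. 1): split the variable so that $f(x) = \tfrac{1}{2}\|Ax - b\|_2^2$, $g(z) = \lambda\|z\|_1$, and the constraint is $x - z = 0$ (so the constraint matrices in the template are $I$ and $-I$ with $c = 0$; the $A$ here is the data matrix):

$$
\begin{aligned}
  \text{minimize}\quad & \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda\|z\|_1 \\
  \text{subject to}\quad & x - z = 0
\end{aligned}
$$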

augmented Lagrangian: $L_\rho(x, z, y) = f(x) + g(z) + y^T(Ax + Bz - c) + \frac{\rho}{2}\|Ax + Bz - c\|_2^2$
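
Scaled form (Boyd et al., ref. 1): with the scaled dual variable $u = \tfrac{1}{\rho} y$, completing the square in the residual $Ax + Bz - c$ gives

$$
L_\rho(x, z, u) = f(x) + g(z) + \frac{\rho}{2}\,\|Ax + Bz - c + u\|_2^2 - \frac{\rho}{2}\,\|u\|_2^2
$$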

ADMM iterates (alternate minimization of $L_\rho$ over $x$ and $z$, then a dual ascent step):

$$
\begin{aligned}
  x^{k+1} &= \operatorname*{argmin}_x\; L_\rho(x, z^k, y^k) \\
  z^{k+1} &= \operatorname*{argmin}_z\; L_\rho(x^{k+1}, z, y^k) \\
  y^{k+1} &= y^k + \rho\,(Ax^{k+1} + Bz^{k+1} - c)
\end{aligned}
$$

#incomplete
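
A minimal sketch of these updates for the lasso split above, assuming $f(x) = \tfrac{1}{2}\|Ax - b\|_2^2$, $g(z) = \lambda\|z\|_1$, the constraint $x - z = 0$, and the scaled dual variable $u = y/\rho$; the function and variable names are illustrative, not from any particular library:

```python
import numpy as np


def admm_lasso(A, b, lam, rho=1.0, n_iter=100):
    """ADMM for lasso: minimize 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x - z = 0."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable u = y / rho

    # The x-update is a ridge-type solve:
    #   x = (A^T A + rho I)^{-1} (A^T b + rho (z - u))
    # so cache the Cholesky factor once and reuse it every iteration.
    AtA = A.T @ A
    Atb = A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))

    def soft_threshold(v, kappa):
        # Proximal operator of kappa * ||.||_1 (this is the z-update).
        return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

    for _ in range(n_iter):
        # x-update: minimize L_rho over x with z, u fixed
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: minimize L_rho over z (soft-thresholding)
        z = soft_threshold(x + u, lam / rho)
        # dual update on the scaled variable (residual is x - z here)
        u = u + (x - z)
    return x, z


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20)
    x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.1 * rng.standard_normal(50)
    x, z = admm_lasso(A, b, lam=1.0)
    print(np.round(z, 2))
```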


Related: linear programming? optimization? Convex Optimization notes


References:

  1. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers (Boyd, Parikh, Chu, Peleato, Eckstein) https://web.stanford.edu/~boyd/papers/pdf/admm_distr_stats.pdf
  2. ADMM lecture notes, Convex Optimization (Ryan Tibshirani, CMU) https://www.stat.cmu.edu/~ryantibs/convexopt/lectures/admm.pdf
  3. Papers with Code, ADMM method page https://paperswithcode.com/method/admm
  4. ADMM lecture notes, Convex Optimization, Fall 2018 (Ryan Tibshirani, CMU) https://www.stat.cmu.edu/~ryantibs/convexopt-F18/lectures/admm.pdf